About the project

The course is taught in R. Every week we have an assignment on a specific topic. R Markdown is used to make the analyses reproducible, which is of great importance in science. Eventually, all the individual R Markdown files are combined into a single work diary.

Regression and model validation

Summary of week 2 study

  • In week 2, I self-studied data wrangling skills, simple data visualization, single- and multi-variable regression, regression diagnostics, model validation and prediction from DataCamp;

  • I have learnt not only the practical skills of regression analysis, but also how and why we model data, and how to validate and use the models.

# Loading packages
library(ggplot2)
library(GGally)

1. Read the data

The original data is downloaded from . The sample collection information can be found here. The original data contains 183 observations of 60 variables. Seven variables were extracted from the original dataset by our local script, namely gender, age, attitude, deep, stra, surf and points.

# Read the new dataset from the local folder and name it learning2014.
learning2014 <- read.csv("data_ready_for_analysis_week2.txt")
# The first few lines of learning2014.
head(learning2014)
##   gender age attitude     deep  stra     surf points
## 1      F  53      3.7 3.583333 3.375 2.583333     25
## 2      M  55      3.1 2.916667 2.750 3.166667     12
## 3      F  49      2.5 3.500000 3.625 2.250000     24
## 4      M  53      3.5 3.500000 3.125 2.250000     10
## 5      M  49      3.7 3.666667 3.625 2.833333     22
## 6      F  38      3.8 4.750000 3.625 2.416667     21
# learning2014 has 183 rows with 7 columns.
dim(learning2014)
## [1] 183   7
# learning2014 is a data frame containing 7 variables related to 183 observations.
str(learning2014)
## 'data.frame':    183 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...

The three variables gender, age and points are extracted directly from the original data; points is the total exam points.

For the other four variables, the mean values of the related columns are taken. For example, attitude is the mean of ten variables (Da, Db, Dc, Dd, De, Df, Dg, Dh, Di, Dj) from the original dataset (a sketch of this computation is shown after the lists below). The ten variables measure the following five perspectives:

  • Confidence in doing statistics
  • Value of statistics
  • Interest in statistics
  • Confidence in doing math
  • Affect toward statistics

deep stands for the deep learning approach, combined from 12 original variables. It measures:

  • Seeking Meaning
  • Relating Ideas
  • Use of Evidence

stra stands for the strategic approach, combined from 8 original variables. It measures:

  • Organized Studying
  • Time Management

surf stands for the surface approach, combined from 12 original variables. It measures:

  • Lack of Purpose
  • Unrelated Memorising
  • Syllabus-boundness

For further information, please check the online information and the local script.
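
A minimal sketch of how such a combination variable could be computed (the object name original_data is an assumption; the real computation lives in the local script mentioned above):

## sketch only: attitude as the row-wise mean of the ten D questions (Da ... Dj)
attitude_questions <- c("Da","Db","Dc","Dd","De","Df","Dg","Dh","Di","Dj")
original_data$attitude <- rowMeans(original_data[, attitude_questions])
## deep, stra and surf are formed in the same way from their own question sets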

2. Graphical overview of the data

ggpairs(learning2014, mapping = aes(color=gender,alpha=0.3), 
        lower = list(combo =wrap("facethist", bins = 20)),
        upper = list(continuous = wrap("cor", size = 2.5)))

From the pair-wise comparison plot, we got a lot of information about learning2014.

    1. There are about twice as many female students as male students. However, the distributions and means of the female and male groups do not differ much for most variables, except for attitude. This can be seen from the histogram, box and density plots.
    2. We can also check the correlation coefficients between the non-factor variables (columns 2 to 7). Regardless of gender, attitude vs points and stra vs points show the two highest correlations, 0.339 and 0.201 respectively. In contrast, surf shows a negative correlation (-0.112) with the final total points. In addition, deep and attitude show a positive correlation (0.135). Taking gender into account, the female and male students generally share the same correlation trends; the only exception is between the deep approach and age, where the female group shows a positive trend but the male group a negative one.

3. Fitting the linear regression model

# Fit a multiple linear regression model with points as the target variable and three explanatory variables (attitude, stra and surf).
my_model <- lm(points ~ attitude + stra + surf, data = learning2014)

# summarize the regression model
summary(my_model)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -25.151  -3.212   2.233   5.257  13.694 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   4.7781     5.1577   0.926   0.3555    
## attitude      3.7570     0.8308   4.522 1.11e-05 ***
## stra          1.8968     0.7832   2.422   0.0164 *  
## surf         -0.6262     1.1494  -0.545   0.5866    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.082 on 179 degrees of freedom
## Multiple R-squared:  0.146,  Adjusted R-squared:  0.1317 
## F-statistic:  10.2 on 3 and 179 DF,  p-value: 3.089e-06

Firstly, we fitted a regression model with points as the target variable and three explanatory variables (attitude, stra and surf), chosen by their absolute correlation coefficients with points: \[y = b + a_1 x_1 + a_2 x_2 + a_3 x_3 + \varepsilon\]

The model summary tells us:

  • The intercept b is 4.7781, i.e. the model predicts 4.7781 points when attitude, stra and surf are all zero.
  • The coefficient of x1 (attitude) is 3.7570, and its estimate is highly significant with a p-value close to zero. The p-value is the probability of observing an estimate at least this extreme if the null hypothesis were true, the null hypothesis here being that attitude makes no contribution to points.
  • The coefficient of x2 (stra) is 1.8968, which also passes the significance test at the 0.05 cutoff.
  • The coefficient of x3 (surf) is -0.6262. The test is not significant for surf (p ≈ 0.59), so we cannot reject the null hypothesis that surf makes no contribution to points.

Thus, we next fit a two-variable model for points:

# Fit a multiple linear regression model with points as the target variable and two explanatory variables (attitude and stra).
my_newmodel <- lm(points ~ attitude + stra, data = learning2014)

# summarize the regression model
summary(my_newmodel)
## 
## Call:
## lm(formula = points ~ attitude + stra, data = learning2014)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -25.162  -3.029   1.911   5.078  13.720 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   2.6680     3.3993   0.785   0.4336    
## attitude      3.8308     0.8181   4.683 5.56e-06 ***
## stra          1.9394     0.7777   2.494   0.0135 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.066 on 180 degrees of freedom
## Multiple R-squared:  0.1446, Adjusted R-squared:  0.1351 
## F-statistic: 15.22 on 2 and 180 DF,  p-value: 7.843e-07
  • The intercept b is 2.6680, i.e. the model predicts 2.6680 points when attitude and stra are both zero.
  • The coefficient of x1 (attitude) is 3.8308: a one-unit increase in attitude is associated with about 3.8 more points.
  • The coefficient of x2 (stra) is 1.9394: a one-unit increase in stra is associated with about 1.9 more points.
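
To make the coefficients concrete, here is a small prediction sketch of my own (the student in new_student is hypothetical, not from the data):

## predicted exam points for a hypothetical student with attitude = 4 and stra = 3
new_student <- data.frame(attitude = 4, stra = 3)
predict(my_newmodel, newdata = new_student)
## roughly 2.67 + 3.83 * 4 + 1.94 * 3, i.e. about 24 points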

4. Model diagnostic

par(mfrow = c(2,2))
plot(my_newmodel,which=c(1,2,5))

Here we used a linear regression model to fit our data. Its assumptions are:

  • The target variable is a linear combination of the explanatory variables.
  • The errors (residuals) are normally distributed.
  • The errors are not correlated with each other.
  • The errors have a constant variance.
  • The errors do not depend on the explanatory variables.

From the diagnostic plots:

    1. The residuals have a roughly constant variance and show no clear pattern, so the linearity assumption is reasonable.
    2. The Normal Q-Q plot shows that most of the residuals are normally distributed, although this does not hold in the two tails.
    3. Even though there are some outliers, the Residuals vs Leverage plot shows they are not influential in determining the regression line.

Logistic regression

Summary of week 3 study

  • In week 3, I self-studied some data wrangling skills (join, mutate, etc.), logistic regression and cross-validation from DataCamp;

  • Logistic regression models a binary (categorical) target variable using explanatory variables (see the formula sketched below).
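
As a brief note of my own, the model links the probability p of the target class (here, high alcohol use) to the explanatory variables through the log odds:

\[\log\frac{p}{1-p} = b_0 + b_1 x_1 + \dots + b_k x_k\]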

## Loading packages
library(dplyr)
## 
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
## 
##     nasa
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(tidyr)
library(ggplot2)
library(GGally)
library(boot)

1. Read the data

This week we will use logistic regression to build a model predicting alcohol consumption (AC). The data is processed from the online data, which consists of two questionnaires. I used the 13 variables present in both (school, sex, age, address, famsize, Pstatus, Medu, Fedu, Mjob, Fjob, reason, nursery, internet) to join the two datasets. The remaining columns were combined by taking the mean when they are numeric, or kept as they are for other data types.
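
A hedged sketch of the wrangling described above; the actual work was done in a separate local script, so the file names and details here are assumptions:

## join the two questionnaires on the 13 shared background variables
join_by <- c("school","sex","age","address","famsize","Pstatus",
             "Medu","Fedu","Mjob","Fjob","reason","nursery","internet")
math <- read.csv("student-mat.csv", sep = ";")   # assumed file name
por  <- read.csv("student-por.csv", sep = ";")   # assumed file name
math_por <- inner_join(math, por, by = join_by, suffix = c(".math", ".por"))

## combine the duplicated columns: mean for numeric answers, otherwise keep the first answer
alc <- select(math_por, one_of(join_by))
for (col_name in setdiff(colnames(math), join_by)) {
  two_cols  <- select(math_por, starts_with(col_name))
  first_col <- two_cols[[1]]
  alc[col_name] <- if (is.numeric(first_col)) round(rowMeans(two_cols)) else first_col
}

## average alcohol use and the logical high-use indicator
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2, high_use = alc_use > 2)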

## Reading data to alc 
data <- "/Users/qingli/Documents/GitHub/IODS-project/Data/processed_alc_data_w3.csv"
alc <- read.csv(data)
colnames(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"
str(alc)
## 'data.frame':    382 obs. of  35 variables:
##  $ school    : Factor w/ 2 levels "GP","MS": 1 1 1 1 1 1 1 1 1 1 ...
##  $ sex       : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
##  $ age       : int  18 17 15 15 16 16 16 17 15 15 ...
##  $ address   : Factor w/ 2 levels "R","U": 2 2 2 2 2 2 2 2 2 2 ...
##  $ famsize   : Factor w/ 2 levels "GT3","LE3": 1 1 2 1 1 2 2 1 2 1 ...
##  $ Pstatus   : Factor w/ 2 levels "A","T": 1 2 2 2 2 2 2 1 1 2 ...
##  $ Medu      : int  4 1 1 4 3 4 2 4 3 3 ...
##  $ Fedu      : int  4 1 1 2 3 3 2 4 2 4 ...
##  $ Mjob      : Factor w/ 5 levels "at_home","health",..: 1 1 1 2 3 4 3 3 4 3 ...
##  $ Fjob      : Factor w/ 5 levels "at_home","health",..: 5 3 3 4 3 3 3 5 3 3 ...
##  $ reason    : Factor w/ 4 levels "course","home",..: 1 1 3 2 2 4 2 2 2 2 ...
##  $ nursery   : Factor w/ 2 levels "no","yes": 2 1 2 2 2 2 2 2 2 2 ...
##  $ internet  : Factor w/ 2 levels "no","yes": 1 2 2 2 1 2 2 1 2 2 ...
##  $ guardian  : Factor w/ 3 levels "father","mother",..: 2 1 2 2 1 2 2 2 2 2 ...
##  $ traveltime: int  2 1 1 1 1 1 1 2 1 1 ...
##  $ studytime : int  2 2 2 3 2 2 2 2 2 2 ...
##  $ failures  : int  0 0 2 0 0 0 0 0 0 0 ...
##  $ schoolsup : Factor w/ 2 levels "no","yes": 2 1 2 1 1 1 1 2 1 1 ...
##  $ famsup    : Factor w/ 2 levels "no","yes": 1 2 1 2 2 2 1 2 2 2 ...
##  $ paid      : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 1 1 2 2 ...
##  $ activities: Factor w/ 2 levels "no","yes": 1 1 1 2 1 2 1 1 1 2 ...
##  $ higher    : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
##  $ romantic  : Factor w/ 2 levels "no","yes": 1 1 1 2 1 1 1 1 1 1 ...
##  $ famrel    : int  4 5 4 3 4 5 4 4 4 5 ...
##  $ freetime  : int  3 3 3 2 3 4 4 1 2 5 ...
##  $ goout     : int  4 3 2 2 2 2 4 4 2 1 ...
##  $ Dalc      : int  1 1 2 1 1 1 1 1 1 1 ...
##  $ Walc      : int  1 1 3 1 2 2 1 1 1 1 ...
##  $ health    : int  3 3 3 5 5 5 3 1 1 5 ...
##  $ absences  : int  5 3 8 1 2 8 0 4 0 0 ...
##  $ G1        : int  2 7 10 14 8 14 12 8 16 13 ...
##  $ G2        : int  8 8 10 14 12 14 12 9 17 14 ...
##  $ G3        : int  8 8 11 14 12 14 12 10 18 14 ...
##  $ alc_use   : num  1 1 2.5 1 1.5 1.5 1 1 1 1 ...
##  $ high_use  : logi  FALSE FALSE TRUE FALSE FALSE FALSE ...

The explanations for the variables:

  • school - student’s school (binary: ‘GP’ - Gabriel Pereira or ‘MS’ - Mousinho da Silveira) +sex - student’s sex (binary: ‘F’ - female or ‘M’ - male)
  • age - student’s age (numeric: from 15 to 22)
  • address - student’s home address type (binary: ‘U’ - urban or ‘R’ - rural)
  • famsize - family size (binary: ‘LE3’ - less or equal to 3 or ‘GT3’ - greater than 3)
  • Pstatus - parent’s cohabitation status (binary: ‘T’ - living together or ‘A’ - apart) + Medu - mother’s education (numeric: 0 - none, 1 - primary education (4th grade), 2 = 5th to 9th grade, 3 = secondary education or 4 = higher education)
  • Fedu - father’s education (numeric: 0 - none, 1 - primary education (4th grade), 2 = 5th to 9th grade, 3 = secondary education or 4 = higher education)
  • Mjob - mother’s job (nominal: ‘teacher’, ‘health’ care related, civil ‘services’ (e.g. administrative or police), ‘at_home’ or ‘other’)
  • Fjob - father’s job (nominal: ‘teacher’, ‘health’ care related, civil ‘services’ (e.g. administrative or police), ‘at_home’ or ‘other’)
  • reason - reason to choose this school (nominal: close to ‘home’, school ‘reputation’, ‘course’ preference or ‘other’)
  • guardian - student’s guardian (nominal: ‘mother’, ‘father’ or ‘other’)
  • traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour)
  • studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)
  • failures - number of past class failures (numeric: n if 1<=n<3, else 4)
  • schoolsup - extra educational support (binary: yes or no)
  • famsup - family educational support (binary: yes or no)
  • paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no)
  • activities - extra-curricular activities (binary: yes or no)
  • nursery - attended nursery school (binary: yes or no)
  • higher - wants to take higher education (binary: yes or no)
  • internet - Internet access at home (binary: yes or no)
  • romantic - with a romantic relationship (binary: yes or no)
  • famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)
  • freetime - free time after school (numeric: from 1 - very low to 5 - very high)
  • goout - going out with friends (numeric: from 1 - very low to 5 - very high)
  • Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high)
  • Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high)
  • health - current health status (numeric: from 1 - very bad to 5 - very good)
  • absences - number of school absences (numeric: from 0 to 93)

These grades are related with the course subject, Math or Portuguese:

  • G1 - first period grade (numeric: from 0 to 20)
  • G2 - second period grade (numeric: from 0 to 20)
  • G3 - final grade (numeric: from 0 to 20, output target)

3. Explore the relationship between the selected variables and alcohol consumption (AC)

## high_use vs paid
alc %>% group_by(paid,high_use) %>% summarise(count=n())
## # A tibble: 4 x 3
## # Groups:   paid [?]
##   paid  high_use count
##   <fct> <lgl>    <int>
## 1 no    FALSE      148
## 2 no    TRUE        57
## 3 yes   FALSE      120
## 4 yes   TRUE        57

Whether or not a student took extra paid classes does not seem to affect AC much.
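
As a small sanity check of my own (not part of the original output), the share of high-use students within each paid group can be computed directly; from the counts above this gives 57/205 ≈ 0.28 for 'no' and 57/177 ≈ 0.32 for 'yes'.

## proportion of high-use students within each paid group
alc %>% group_by(paid) %>% summarise(high_use_rate = mean(high_use))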

## high_use vs goout
high_AC_goout_mean <- mean(alc$goout[alc$high_use == "TRUE"])
low_AC_goout_mean <- mean(alc$goout[alc$high_use == "FALSE"])

print(c("high AC group goout_mean",high_AC_goout_mean,
        "low AC group goout_mean",low_AC_goout_mean))
## [1] "high AC group goout_mean" "3.71929824561404"        
## [3] "low AC group goout_mean"  "2.8544776119403"
g1 <- ggplot(alc, aes(x = high_use, y = goout, col = paid)) + geom_boxplot() +
  ggtitle("goout by high_use") + ylab("go_out freq") + xlab("alcohol consumption")

g1

Students with higher AC go out with friends more frequently: the mean goout frequency in the higher-AC group is 3.7, compared with 2.9 in the lower-AC group.

## high_use vs family relationship
high_AC_famrel_mean <- mean(alc$famrel[alc$high_use == "TRUE"])
low_AC_famrel_mean <- mean(alc$famrel[alc$high_use == "FALSE"])
print(c("high AC group family relationship status:",
        high_AC_famrel_mean,"low AC group family relationship status:",low_AC_famrel_mean))
## [1] "high AC group family relationship status:"
## [2] "3.78070175438596"                         
## [3] "low AC group family relationship status:" 
## [4] "4.00373134328358"
g2 <- ggplot(alc, aes(x = high_use, y = famrel)) + geom_boxplot() +
 ggtitle("family relationship by high_use") + ylab("family relationship status") + xlab("alcohol consumption")
g2

The family relationship status does not differ much between the higher-AC and lower-AC students judging from the group means (3.8 and 4.0, respectively), although the boxplots look somewhat different.

## high_use vs absences
high_AC_absence_mean <- mean(alc$absences[alc$high_use == "TRUE"])
low_AC_absence_mean <- mean(alc$absences[alc$high_use == "FALSE"])

print(c("high_AC group absence times (mean):",high_AC_absence_mean,
        "low_AC group absence times (mean):",low_AC_absence_mean))
## [1] "high_AC group absence times (mean):"
## [2] "3.70175438596491"                   
## [3] "low_AC group absence times (mean):" 
## [4] "3.51865671641791"
g3 <- ggplot(alc, aes(x = high_use, y = absences, col = paid)) +
 geom_boxplot() + ggtitle("absences by high_use") + ylab("absence times") + xlab("alcohol consumption")
g3

The mean numbers of absences do not differ dramatically between the higher-AC and lower-AC students, but the boxplot shows that the higher-AC group tends to have more absences.

4. Logistic regression

m <- glm(high_use ~ paid + goout + famrel + absences-1, data = alc, family = "binomial")
#summary(m)
mm <- glm(high_use ~ goout + famrel + absences -1, data = alc, family = "binomial")
#summary(mm)
anova(m, mm, test="LRT")
## Analysis of Deviance Table
## 
## Model 1: high_use ~ paid + goout + famrel + absences - 1
## Model 2: high_use ~ goout + famrel + absences - 1
##   Resid. Df Resid. Dev Df Deviance  Pr(>Chi)    
## 1       377     394.31                          
## 2       379     411.11 -2  -16.795 0.0002254 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
  • Because the model was fitted without an intercept, the coefficient of paidno (-2.56121) plays the role of a baseline: it is the log odds of high AC for a student in the paid = no group when the other variables are zero.
  • The logistic regression model shows that goout and absences are the two most important explanatory variables, with clearly significant p-values. famrel follows with a less significant p-value. Surprisingly, paidyes does not contribute that much to AC.
  • The coefficient of a continuous variable (e.g. 0.775 for goout) is the expected change in the log odds for a one-unit increase in that variable.
  • The significance of each coefficient, including paidyes, is assessed with a Wald test (the z statistic in the summary); because the model has no intercept, both levels of the categorical variable paid get their own coefficient instead of one level being treated as a reference class.
odd_ratio <- coef(m) %>% exp
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
cbind(odd_ratio,CI)
##           odd_ratio      2.5 %    97.5 %
## paidno   0.07721094 0.02069002 0.2698292
## paidyes  0.10441527 0.02872560 0.3596659
## goout    2.17100371 1.72550406 2.7679697
## famrel   0.70612502 0.53977795 0.9205137
## absences 1.07918726 1.03504151 1.1291747
  • The exponentiated coefficient of paidno (0.077) gives the odds of being a high-consumption student in the paid = no group when the other variables are zero.
  • The odds ratio between paidyes and paidno is about 1.35 (0.104 / 0.077), and its 95% confidence interval is [0.83, 2.21], i.e. the interval within which we can be 95% confident the true odds ratio lies.
  • The odds ratios and 95% confidence intervals are also listed for the three numeric variables. For example, the odds of high AC are multiplied by about 2.17 for each one-unit increase in goout, with a 95% confidence interval of [1.73, 2.77] for this odds ratio.

Therefore, we keep only the significantly contributing variables in the new model, which will be used for prediction.

m_new <- glm(high_use ~ goout + famrel + absences, data = alc, family = "binomial")
summary(m_new)
## 
## Call:
## glm(formula = high_use ~ goout + famrel + absences, family = "binomial", 
##     data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.9018  -0.7731  -0.5466   0.9002   2.4180  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -2.38825    0.63161  -3.781 0.000156 ***
## goout        0.77071    0.11981   6.433 1.25e-10 ***
## famrel      -0.34986    0.13534  -2.585 0.009735 ** 
## absences     0.07446    0.02174   3.425 0.000615 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 395.79  on 378  degrees of freedom
## AIC: 403.79
## 
## Number of Fisher Scoring iterations: 4

6. Cross validation for model performance validation
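
The cross-validation below uses a cost function loss_func whose definition is not shown in this extract. A minimal sketch, assuming it is the usual mean mis-classification loss from the DataCamp exercises:

## cost function: share of predictions on the wrong side of 0.5
## (an assumed definition; the actual loss_func may differ slightly)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}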

cv <- cv.glm(data = alc, cost = loss_func, glmfit = m_new, K = 10)
cv$delta[1]
## [1] 0.2486911

The prediction error of the current model is about 0.249.

m_tutorial <- glm(high_use ~ failures + absences + sex, data = alc, family = "binomial")
cv_tutorial <- cv.glm(data = alc, cost = loss_func, glmfit = m_tutorial, K = 10)
cv_tutorial$delta[1]
## [1] 0.2513089

The prediction error of the model given in the tutorial is 0.251, which is higher than that of our current model, so our model performs slightly better.

7. Model comparison

## Use 8 variables as predictors
m1 <- glm(high_use ~ failures + absences + sex + paid + goout + famrel + higher + famsize, data = alc, family = "binomial")
## summarize the model
summary (m1)
## 
## Call:
## glm(formula = high_use ~ failures + absences + sex + paid + goout + 
##     famrel + higher + famsize, family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.7657  -0.7238  -0.4910   0.6833   2.6906  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -3.10908    0.90968  -3.418 0.000631 ***
## failures     0.37302    0.21838   1.708 0.087610 .  
## absences     0.08232    0.02250   3.659 0.000253 ***
## sexM         1.01413    0.26839   3.779 0.000158 ***
## paidyes      0.55953    0.26898   2.080 0.037503 *  
## goout        0.74982    0.12473   6.012 1.84e-09 ***
## famrel      -0.37065    0.14234  -2.604 0.009216 ** 
## higheryes   -0.12972    0.57954  -0.224 0.822886    
## famsizeLE3   0.28345    0.28081   1.009 0.312779    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 372.39  on 373  degrees of freedom
## AIC: 390.39
## 
## Number of Fisher Scoring iterations: 4
## For training data
probabilities_m1 <- predict(m1, type = "response")
m1_ave_wrong_pred <- loss_func(class = alc$high_use, prob = probabilities_m1)
m1_ave_wrong_pred
## [1] 0.2146597
## For testing data
cv_m1 <- cv.glm(data = alc, cost = loss_func, glmfit = m1, K = 10)
cv_m1_wrong_pred <- cv_m1$delta[1]
cv_m1_wrong_pred
## [1] 0.2277487

In the above model, we used 8 variables to predict AC. Five variables (absences, sexM, paidyes, goout, famrel) contribute significantly to AC, while the other three (failures, higheryes and famsizeLE3) do not. For this model, the prediction errors are 0.2146597 for the training data and 0.2277487 for the testing data, which is only slightly better than our three-variable goout + famrel + absences model (about 0.25).

To plot the prediction errors on the training and testing data for different models, we first create a data frame to collect all the values as follows:

## Models with 8, 7, 6, 5 and 4 predictor variables will be compared. For each model, we collect its prediction error on the training and on the testing data, so there are 10 values in total.
model_name <- NULL ## model name for each entry, identified by its number of predictor variables
types <- NULL ## to distinguish the two types of prediction error for each model
values_collection <- replicate(10,0) ## initialize the prediction error vector to ten zeros
## fill in the first two prediction error values for model8
values_collection[c(1,2)] <- c(m1_ave_wrong_pred,cv_m1_wrong_pred) 

## Change the model names and types using a loop:
for (i in c(8,7,6,5,4)){
  model_name <- c (model_name, replicate(2,paste("model",i,sep='')) )
  types <- c(types, c("training","testing"))
}

## give the three vectors to our model collection dataframe.
model_collection <- data.frame(model=model_name,pred_error=values_collection,used_data=types)
## print model collection
model_collection
##     model pred_error used_data
## 1  model8  0.2146597  training
## 2  model8  0.2277487   testing
## 3  model7  0.0000000  training
## 4  model7  0.0000000   testing
## 5  model6  0.0000000  training
## 6  model6  0.0000000   testing
## 7  model5  0.0000000  training
## 8  model5  0.0000000   testing
## 9  model4  0.0000000  training
## 10 model4  0.0000000   testing

Now, we will fit the different models and collect their prediction errors into our model collection data frame.
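
As a side note (my own sketch, not part of the original analysis), the repeated blocks below could be wrapped in a small helper that returns both error types for a given formula:

## helper sketch; assumes alc, loss_func and the boot package are available
model_errors <- function(model_formula) {
  m <- glm(model_formula, data = alc, family = "binomial")
  train_err <- loss_func(class = alc$high_use, prob = predict(m, type = "response"))
  test_err <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)$delta[1]
  c(training = train_err, testing = test_err)
}
## e.g. model_errors(high_use ~ absences + sex + goout + famrel)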

## for model7:
m <- glm(high_use ~ failures + absences + sex + paid + goout + famrel + higher, data = alc, family = "binomial")
## For training data
probabilities_m <- predict(m, type = "response")
m_ave_wrong_pred <- loss_func(class = alc$high_use, prob = probabilities_m)
## For testing data
cv_m <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv_m_wrong_pred <- cv_m$delta[1]
values_collection[c(3,4)] <- c(m_ave_wrong_pred,cv_m_wrong_pred) 

## for model6:

m <- glm(high_use ~ failures + absences + sex + paid + goout + famrel, data = alc, family = "binomial")
## For training data
probabilities_m <- predict(m, type = "response")
m_ave_wrong_pred <- loss_func(class = alc$high_use, prob = probabilities_m)
## For testing data
cv_m <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv_m_wrong_pred <- cv_m$delta[1]
values_collection[c(5,6)] <- c(m_ave_wrong_pred,cv_m_wrong_pred)

## for model5:

m <- glm(high_use ~ absences + sex + paid + goout + famrel, data = alc, family = "binomial")
## For training data
probabilities_m <- predict(m, type = "response")
m_ave_wrong_pred <- loss_func(class = alc$high_use, prob = probabilities_m)
## For testing data
cv_m <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv_m_wrong_pred <- cv_m$delta[1]
values_collection[c(7,8)] <- c(m_ave_wrong_pred,cv_m_wrong_pred)

## for model4:

m <- glm(high_use ~ absences + sex + goout + famrel, data = alc, family = "binomial")
## For training data
probabilities_m <- predict(m, type = "response")
m_ave_wrong_pred <- loss_func(class = alc$high_use, prob = probabilities_m)
## For testing data
cv_m <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv_m_wrong_pred <- cv_m$delta[1]
values_collection[c(9,10)] <- c(m_ave_wrong_pred,cv_m_wrong_pred)
values_collection
##  [1] 0.2146597 0.2277487 0.2015707 0.2146597 0.2015707 0.2120419 0.2094241
##  [8] 0.2225131 0.2041885 0.2094241

We have collected all the prediction errors for all models. Next, we will draw the figure to compare them.

model_order<-paste ("model",c(8,7,6,5,4),sep="")
model_collection$model <- factor(model_collection$model,levels=as.vector(model_order))
model_collection$pred_error <- values_collection
model_collection
##     model pred_error used_data
## 1  model8  0.2146597  training
## 2  model8  0.2277487   testing
## 3  model7  0.2015707  training
## 4  model7  0.2146597   testing
## 5  model6  0.2015707  training
## 6  model6  0.2120419   testing
## 7  model5  0.2094241  training
## 8  model5  0.2225131   testing
## 9  model4  0.2041885  training
## 10 model4  0.2094241   testing
ggplot(model_collection,aes(x=model,y=pred_error,col=used_data, group=used_data)) +geom_point() + geom_line()

The figure above shows that the models with fewer variables perform better than those with more predictors. The likely reason is over-fitting when too many variables are used in the model. In other words, more predictors do not guarantee better performance.


Clustering and classification

Summary of week 4 study

  • In week 4, I studied how to use linear discriminant analysis (LDA) to classify a categorical target, and how to use k-means to cluster samples based on their multivariate observations, from DataCamp;

  • A few more keywords for this week are: covariance matrix, correlation matrix, training/test dataset and Euclidean distance

## Loading packages
library(dplyr)
library(tidyr)
library(ggplot2)
library(GGally)
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select

1. Read the data

This week we will use the Boston data from MASS to explore linear discriminant analysis (LDA) and cluster analysis. The Boston data frame describes the crime rate and related information for the Boston area, USA:

  • crim: per capita crime rate by town.
  • zn: proportion of residential land zoned for lots over 25,000 sq.ft.
  • indus: proportion of non-retail business acres per town.
  • chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
  • nox: nitrogen oxides concentration (parts per 10 million).
  • rm: average number of rooms per dwelling.
  • age: proportion of owner-occupied units built prior to 1940.
  • dis: weighted mean of distances to five Boston employment centres.
  • rad: index of accessibility to radial highways.
  • tax: full-value property-tax rate per $10,000.
  • ptratio: pupil-teacher ratio by town.
  • black: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
  • lstat: lower status of the population (percent).
  • medv: median value of owner-occupied homes in $1000s.

A more detailed description of the data can be found here.

## Load the Boston data from MASS
data('Boston')
dim(Boston)
## [1] 506  14
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

Boston has 506 rows and 14 columns. chas and rad are integers, and the other variables are numeric.

2. Summary and graphical overview of the data

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

From the summary table, we can find the minimum, maximum, mean, median and quantiles of each variable. The observations are also on very different scales, so we will need to standardize them using the corresponding means and standard deviations (see the formula below).
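
In other words, each variable is centred by its mean and divided by its standard deviation, which is what scale() does below:

\[x_{scaled} = \frac{x - \bar{x}}{s_x}\]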

ggpairs(Boston, mapping = aes(col='tomato',alpha=0.05), 
        lower = list(combo =wrap("facethist", bins = 30)),
        upper = list(continuous = wrap("cor", size = 2.5)))

  • The diagonal of the figure above shows the distribution of each variable. The average number of rooms (rm) looks normally distributed, but this is not the case for the remaining variables.
  • We would like to find variables that can be used to predict the crime rate. From the correlation matrix in the figure, rad, tax and lstat are the three variables most positively associated with crim. In contrast, medv and dis are the two most negatively associated with the target.
  • In addition, some correlations between the other variables are worth mentioning, e.g. tax vs rad (0.91), indus vs nox (0.76), age vs nox (0.73), zn vs dis (0.66), rm vs medv (0.69), medv vs lstat (-0.73), dis vs age (-0.74), dis vs nox (-0.76) and lstat vs rm (-0.61).

3. Standardization of the dataset

## standardize Boston
boston_scaled <- scale(Boston)
## scale() returns a matrix; transform it into a data frame
boston_scaled <- as.data.frame(boston_scaled)
summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865

The data is standardized by the mean and standard deviation of each variable. As we can see, the means of the new variables are all zero, and the variables are now on the same scale.

## create a quantile vector of crim from the scaled data
bins <- quantile(boston_scaled$crim)

## create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, label = c("low", "med_low", "med_high", "high"), include.lowest = TRUE)

## drop old crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

## add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

## how many rows in the scaled data
n <- nrow(boston_scaled)

## set seed to repeat the random sampling
set.seed(1111)

## randomly sample n indices between (0,n]
ind <- sample(n,  size = n)

## use the first 80% of the randomly permuted indices as the training data
train <- boston_scaled[head(ind,n=0.8*n),]
dim (train)
## [1] 404  14
## use the last 20% of the randomly permuted indices as the test data
test <- boston_scaled[tail(ind,n=0.2*n),]
dim (test)
## [1] 102  14
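
Note that the LDA fit itself is not shown in this extract of the diary, although lda.fit is used for prediction in the next step. It is presumably created roughly as follows (a sketch assuming MASS::lda with crime as the target and all the other variables as predictors):

## fit linear discriminant analysis on the training data (sketch)
lda.fit <- lda(crime ~ ., data = train)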

5. Predict the target variable in test set

## save the correct classes from test data
correct_classes <- test$crime

## remove the crime variable from test data
test <- dplyr::select(test, -crime)

## predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       18      10        0    0
##   med_low    7      14        3    0
##   med_high   1      14        9    2
##   high       0       0        0   24

From the prediction results above, our model predicts the high crime category with 100% accuracy (24/24). The low category is predicted with about 64% accuracy (18/28), med_low with about 58% (14/24), and med_high with only about 35% (9/26). Overall, the success rate of the model is (18+14+9+24)/102 ≈ 63.7%, so the error rate is about 36.3%.
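
A quick way to read the overall accuracy directly off the cross-tabulation (a small check of my own, not part of the original output):

## overall accuracy = correct predictions on the diagonal / all test cases
conf_matrix <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(conf_matrix)) / sum(conf_matrix)  ## (18 + 14 + 9 + 24) / 102, about 0.64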

6. Clustering by K-means

## reload the data
data("Boston")
## rescale the data
boston_scaledNew <- scale(Boston)
boston_scaledNew <- as.data.frame(boston_scaledNew)
## calculate the distance matrix with default 'Euclidean distance' method.
boston_scaledNew_dist <- dist(boston_scaledNew)
summary(boston_scaledNew_dist)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970

The minimum and maximum Euclidean distances in boston_scaledNew are 0.13 and 14.40, and the mean distance between observations is 4.91.

## set seed to repeat the randomness in K-means
set.seed(1234)

# set the maximum number of clusters to try
k_max <- 10

# calculate the total within cluster sum of squares (WCSS)
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaledNew, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

From the figure above, we choose k = 2 clusters, because that is where the elbow occurs: the total within-cluster sum of squares (WCSS) drops sharply up to that point while the number of clusters stays small.

# k-means clustering
km <-kmeans(boston_scaledNew, centers = 2)
# plot the Boston dataset with clusters
ggpairs(boston_scaledNew, mapping = aes(col=as.factor(unname(km$cluster)),alpha=0.05), 
        lower = list(combo =wrap("facethist", bins = 30)),
        upper = list(continuous = wrap("cor", size = 2.5)))
## Warning in cor(x, y, method = method, use = use): the standard deviation is
## zero
## (the same warning is repeated for several of the panels)

  • From the pairwise comparison plot above, cluster 1 (more likely the lower crime rate regions) is shown in red and cluster 2 (more likely the higher crime rate regions) in blue. Overall, the higher and lower crime rate groups are clustered quite well. However, we can draw a similar conclusion as from the LDA classification: the higher crime rate regions are clustered better than the lower crime rate ones, since some lower crime regions (red dots) are mislabelled as higher crime regions (blue dots) by the clustering. Similarly, in the LDA results the lower crime categories have a higher prediction error.
  • Because we used the scaled data (the mean of each variable is zero) in the clustering, what we observe in the figure is also centred around zero. Labelling the samples with the cluster result helps us to understand some of the individual distributions, e.g. zn, indus, nox, age and rad; for these variables it is easier to see that each distribution consists of two types of data.
  • A higher indus (proportion of non-retail business acres per town) is negatively associated with the higher crime rate cluster (-0.28). In contrast, a higher rad (index of accessibility to radial highways) is positively associated with it (0.40).

8. 3D-LDA plot on train data

model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
matrix_product <- mutate(matrix_product,crim=train$crime)

## install the 'plotly' package:
## install.packages("plotly") ## only needs to be run once, so it is commented out here.
library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
## color by crime categories:
plot_ly(matrix_product,x = ~LD1, y = ~ LD2, z = ~LD3, type= 'scatter3d', mode='markers', color = ~crim)
## there are 4 crime categories, so I cluster the training data (model_predictors, which excludes the target crime) into four clusters:
set.seed(1234)
km_train <- kmeans(model_predictors, 4)
matrix_product <- mutate (matrix_product, cluster= as.factor(unname(km_train$cluster)))
plot_ly(matrix_product,x = ~LD1, y = ~ LD2, z = ~LD3, type= 'scatter3d', mode='markers', color = ~cluster)

The two 3D LDA plots are similar to each other; the crime categories roughly correspond to the clusters as high -> 2, med_high -> 1, med_low -> 3 and low -> 4. In both 3D plots, the high crime rate group is the best separated from the other three groups. The med_low group lies between the med_high and low groups, which is also understandable.


Dimensionality reduction techniques

Summary of week 5 study

  • In week 5, I studied how to use principal component analysis (PCA), correspondence analysis (CA) and multiple correspondence analysis (MCA) to reduce the dimensionality of multivariate data.
  • The aim of this kind of analysis is to reduce the complexity of the dataset. The multiple features of the observations are linearly combined into K components. The K components are uncorrelated with each other and ranked by how much of the variance of the original data they explain.
  • A biplot can be used to visualize the observations on axes rotated according to the identified components (normally the first two), with arrows added to show how the features correlate with each other and with the components. CA and MCA are used for this kind of analysis on categorical features.
## Loading packages
library(dplyr)
library(tidyr)
library(ggplot2)
library(GGally)
library(corrplot)
## corrplot 0.84 loaded
library(FactoMineR)

1. Read the data

This week we will use human development index data for the PCA analysis. The original data is from here.

data<-"/Users/qingli/Documents/GitHub/IODS-project/Data/Human_data_w5.csv"
human <- read.csv(data,row.names = 1)
dim(human) ## 155 observations with 8 features
## [1] 155   8

The data frame has 155 observations of 8 variables:

  • Life.Exp: Life expectancy at birth
  • Edu.Exp: Expected years of schooling
  • GNI: Gross National Income per capita
  • Mat.Mor: Maternal mortality ratio
  • Ado.Birth: Adolescent birth rate
  • Parli.F: Percentage of female representatives in parliament
  • Edu2.FM: Edu2.F / Edu2.M
  • Labo.FM: Labo2.F / Labo2.M

Note: “Edu2.F” = proportion of females with at least secondary education; “Edu2.M” = proportion of males with at least secondary education; “Labo.F” = proportion of females in the labour force; “Labo.M” = proportion of males in the labour force.
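
As a hedged one-line sketch, the two ratio variables could have been created in the data-wrangling script roughly as follows; this applies to the intermediate wrangling data (called human_wrangling here purely as a hypothetical name), which still contains the Edu2.F, Edu2.M, Labo.F and Labo.M columns, not to the human data frame loaded above:

## sketch only; the real computation is in the week 5 wrangling script
human_wrangling <- mutate(human_wrangling,
                          Edu2.FM = Edu2.F / Edu2.M,
                          Labo.FM = Labo.F / Labo.M)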

The correlation of the 8 features

## compute the correlation matrix
cor_matrix <- cor(human) 
corrplot(cor_matrix,method="circle",
         type="upper", order="hclust", 
         addCoef.col = "black", # Add coefficient of correlation
         tl.col="black", tl.srt=45, #Text label color and rotation
         # Combine with significance
         # hide correlation coefficient on the principal diagonal
         diag=FALSE)

From the figure above, we see that Life.Exp and Edu.Exp are positively correlated (r = 0.79), followed by Mat.Mor vs Ado.Birth (r = 0.76) and GNI vs Life.Exp (r = 0.63). In contrast, Life.Exp is negatively correlated with Mat.Mor and Ado.Birth, with r = -0.86 and r = -0.73. Edu.Exp shows the same opposite trend against Mat.Mor and Ado.Birth as Life.Exp. It is interesting that longer life expectancy goes together with longer expected education time, and with lower maternal mortality and adolescent birth rates. Overall, several variable pairs have absolute correlation coefficients above 0.5, which means that the uncorrelated components from a PCA should provide a clearer way to understand the data.

Distribution of the variables

ggpairs(human, mapping = aes(col='steelblue',alpha=0.05), 
        lower = list(combo =wrap("facethist", bins = 30)),
        upper = list(continuous = wrap("cor", size = 2.5)))

This figure shows the scatter plots of the 8 features in human, with their correlation coefficients in the upper corner; it carries the same information as the correlation plot above, but from a different graphical perspective. The diagonal of the figure gives the distributions of the variables. Most of them are not normally distributed by eye; it is worth mentioning that Edu.Exp and Parli.F are the closest to a Gaussian distribution.

2. PCA on non-standardized data

Next, we will apply PCA to the unstandardized human data.

pca_unscaled_human <- prcomp(human)
summary(pca_unscaled_human)
## Importance of components:
##                              PC1      PC2   PC3   PC4   PC5   PC6    PC7
## Standard deviation     1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912
## Proportion of Variance 9.999e-01   0.0001  0.00  0.00 0.000 0.000 0.0000
## Cumulative Proportion  9.999e-01   1.0000  1.00  1.00 1.000 1.000 1.0000
##                           PC8
## Standard deviation     0.1591
## Proportion of Variance 0.0000
## Cumulative Proportion  1.0000

For the unscaled human data, the first principal component explains almost all of the variance.

biplot(pca_unscaled_human, choices = 1:2,cex = c(0.8, 0.8),col = c("grey40", "deeppink2"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

The biplot suggests that GNI alone explains practically all of the variance of the data. Because our data is unscaled, we will have a look at the summary and the covariance matrix of the data.

summary(human)
##     Life.Exp        Edu.Exp           GNI            Mat.Mor      
##  Min.   :49.00   Min.   : 5.40   Min.   :   581   Min.   :   1.0  
##  1st Qu.:66.30   1st Qu.:11.25   1st Qu.:  4198   1st Qu.:  11.5  
##  Median :74.20   Median :13.50   Median : 12040   Median :  49.0  
##  Mean   :71.65   Mean   :13.18   Mean   : 17628   Mean   : 149.1  
##  3rd Qu.:77.25   3rd Qu.:15.20   3rd Qu.: 24512   3rd Qu.: 190.0  
##  Max.   :83.50   Max.   :20.20   Max.   :123124   Max.   :1100.0  
##    Ado.Birth         Parli.F         Edu2.FM          Labo.FM      
##  Min.   :  0.60   Min.   : 0.00   Min.   :0.1717   Min.   :0.1857  
##  1st Qu.: 12.65   1st Qu.:12.40   1st Qu.:0.7264   1st Qu.:0.5984  
##  Median : 33.60   Median :19.30   Median :0.9375   Median :0.7535  
##  Mean   : 47.16   Mean   :20.91   Mean   :0.8529   Mean   :0.7074  
##  3rd Qu.: 71.95   3rd Qu.:27.95   3rd Qu.:0.9968   3rd Qu.:0.8535  
##  Max.   :204.80   Max.   :57.50   Max.   :1.4967   Max.   :1.0380
var(human)
##                Life.Exp       Edu.Exp           GNI       Mat.Mor
## Life.Exp     69.4232828  1.868220e+01  9.682497e+04 -1.512597e+03
## Edu.Exp      18.6821956  8.067024e+00  3.288345e+04 -4.425512e+02
## GNI       96824.9662547  3.288345e+04  3.438745e+08 -1.944698e+06
## Mat.Mor   -1512.5973775 -4.425512e+02 -1.944698e+06  4.485483e+04
## Ado.Birth  -249.7784240 -8.215424e+01 -4.243095e+05  6.605745e+03
## Parli.F      16.2800884  6.724045e+00  1.900376e+04 -2.176062e+02
## Edu2.FM       1.1597535  4.071587e-01  1.928166e+03 -3.382434e+01
## Labo.FM      -0.2318937  2.671701e-02 -8.013518e+01  1.012323e+01
##               Ado.Birth       Parli.F       Edu2.FM       Labo.FM
## Life.Exp  -2.497784e+02    16.2800884  1.159753e+00 -2.318937e-01
## Edu.Exp   -8.215424e+01     6.7240452  4.071587e-01  2.671701e-02
## GNI       -4.243095e+05 19003.7563259  1.928166e+03 -8.013518e+01
## Mat.Mor    6.605745e+03  -217.6061751 -3.382434e+01  1.012323e+01
## Ado.Birth  1.690201e+03   -33.4746473 -5.259401e+00  9.819617e-01
## Parli.F   -3.347465e+01   131.9683058  2.182832e-01  5.714106e-01
## Edu2.FM   -5.259401e+00     0.2182832  5.838970e-02  4.593874e-04
## Labo.FM    9.819617e-01     0.5714106  4.593874e-04  3.951293e-02

Without scaling, the covariance matrix of the data is used to compute the principal components. The diagonal of the covariance matrix contains the variance of each variable. As we can see, the variance of GNI (gross income) is about 3.44e+08, by far the largest in the whole dataset. In contrast, the variances of Edu2.FM and Labo.FM are tiny, because both are simply ratios of two proportions and lie in the interval (0, 1). Since PCA looks for the components that explain the most variance, data with such imbalanced variances needs to be standardized before PCA is applied.

3. PCA on standardized data

Instead of the covariance matrix (PCA on the original data), we now effectively use the correlation matrix by standardizing the data before the PCA.

scaled_human <- scale(human)
pca_scaled_human <- prcomp(scaled_human)
summary(pca_scaled_human)
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069
##                            PC7     PC8
## Standard deviation     0.45900 0.32224
## Proportion of Variance 0.02634 0.01298
## Cumulative Proportion  0.98702 1.00000

The new PCA results show that the top three principal components explain 53.61%, 16.24% and 9.57% of the total variance, respectively. Taken together, these three cover 79.41% of the variance of the original dataset.

biplot(pca_scaled_human, choices = 1:2,cex = c(0.8, 0.6),col = c("grey40", "deeppink2"))

  • After scaling the data, we can see much more information in the biplot above. In my view, the first component can be explained by the economic development level of the countries. The countries on the left of the figure (e.g. Qatar, United Arab Emirates, Australia, etc.) have a higher income per citizen. The ones in the middle, like South Africa, Nepal and Sudan, are less developed in comparison, and the countries on the right of the figure are still developing. Among the eight variables, Life.Exp, Edu2.FM, GNI and Edu.Exp have very small angles between each other and point in the same direction as PC1, which implies that better economic development goes together with higher life expectancy, longer education time and a higher female education rate. In addition, Ado.Birth and Mat.Mor point in the opposite direction to those four variables, but are still strongly related to the gross income of individuals.
  • The PC2 direction of the biplot seems more related to the gender gap, probably driven by cultural differences. In Rwanda, Bolivia, Mozambique and some Nordic countries (e.g. Iceland, Sweden, Norway and Denmark), the percentage of females in parliament and in the general labour market is reported to be among the highest worldwide. The situation in, for example, Iran, Yemen, Jordan and Syria is the opposite, probably due to cultural constraints on women working and being hired.

4. MCA on tea dataset

In the last part, we will use the tea data to explore MCA, which reduces the dimensionality of categorical variables.

## load the data
data(tea)
## check the colnames of tea
colnames(tea)
##  [1] "breakfast"        "tea.time"         "evening"         
##  [4] "lunch"            "dinner"           "always"          
##  [7] "home"             "work"             "tearoom"         
## [10] "friends"          "resto"            "pub"             
## [13] "Tea"              "How"              "sugar"           
## [16] "how"              "where"            "price"           
## [19] "age"              "sex"              "SPC"             
## [22] "Sport"            "age_Q"            "frequency"       
## [25] "escape.exoticism" "spirituality"     "healthy"         
## [28] "diuretic"         "friendliness"     "iron.absorption" 
## [31] "feminine"         "sophisticated"    "slimming"        
## [34] "exciting"         "relaxing"         "effect.on.health"
dim(tea)
## [1] 300  36
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...

The tea data frame has 300 obs. of 36 variables. Except for a very few, the variables are categorical with binary 'yes/no'-style answers (e.g. breakfast vs. Not.breakfast). There is one integer variable recording the respondents' ages.

# Exclude the age variable: it is not categorical, and age_Q already covers the same information
tea_new <- subset(tea, select = -age)

# Bar plot of every variable in tea_new, one facet per variable
gather(tea_new) %>%
  ggplot(aes(value)) +
  geom_bar() +
  facet_wrap("key", scales = "free") +
  theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped

We can see that for some of the variables, e.g. How, effect.on.health, home and price, the counts differ considerably across their categories. In contrast, breakfast, escape.exoticism and sugar do not differ much between their levels, as the quick check below illustrates for a few columns.
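As a quick numeric check of this impression (the choice of columns here is only illustrative), the level proportions can be tabulated directly:

# Share of observations in each level for a few of the variables mentioned above
lapply(tea_new[, c("breakfast", "sugar", "home")],
       function(x) round(prop.table(table(x)), 2))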

# Keep a subset of categorical variables for the MCA
kept <- c('How', 'Sport', 'breakfast', 'lunch', 'dinner', 'friends', 'sex', 'pub', 'home', 'tea.time', 'frequency')
tea_subset <- select(tea, one_of(kept))
# Run multiple correspondence analysis on the selected columns
mca <- MCA(tea_subset, graph = FALSE)
summary(mca)
## 
## Call:
## MCA(X = tea_subset, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.175   0.140   0.121   0.106   0.102   0.099
## % of var.             12.830  10.279   8.889   7.744   7.448   7.276
## Cumulative % of var.  12.830  23.109  31.998  39.742  47.190  54.466
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11  Dim.12
## Variance               0.095   0.083   0.081   0.071   0.069   0.065
## % of var.              6.997   6.102   5.914   5.235   5.090   4.738
## Cumulative % of var.  61.463  67.565  73.479  78.713  83.803  88.541
##                       Dim.13  Dim.14  Dim.15
## Variance               0.058   0.055   0.044
## % of var.              4.262   4.004   3.193
## Cumulative % of var.  92.803  96.807 100.000
## 
## Individuals (the 10 first)
##                  Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1             |  0.391  0.292  0.175 | -0.678  1.093  0.525 | -0.002
## 2             |  0.113  0.024  0.012 | -0.770  1.410  0.540 | -0.083
## 3             |  0.194  0.072  0.022 |  0.321  0.246  0.059 |  0.017
## 4             |  1.003  1.916  0.471 | -0.188  0.084  0.017 | -0.443
## 5             |  0.150  0.043  0.028 | -0.591  0.831  0.435 |  0.152
## 6             |  1.022  1.990  0.507 | -0.315  0.236  0.048 | -0.085
## 7             | -0.018  0.001  0.000 |  0.042  0.004  0.001 |  0.241
## 8             |  0.222  0.094  0.036 |  0.014  0.000  0.000 |  0.147
## 9             | -0.159  0.048  0.024 | -0.736  1.289  0.517 |  0.131
## 10            |  0.131  0.033  0.020 | -0.464  0.512  0.245 | -0.205
##                  ctr   cos2  
## 1              0.000  0.000 |
## 2              0.019  0.006 |
## 3              0.001  0.000 |
## 4              0.539  0.092 |
## 5              0.064  0.029 |
## 6              0.020  0.004 |
## 7              0.159  0.047 |
## 8              0.060  0.016 |
## 9              0.047  0.016 |
## 10             0.115  0.048 |
## 
## Categories (the 10 first)
##                   Dim.1     ctr    cos2  v.test     Dim.2     ctr    cos2
## alone         |   0.116   0.458   0.025   2.743 |   0.147   0.913   0.040
## lemon         |  -0.253   0.365   0.008  -1.536 |   0.798   4.540   0.079
## milk          |  -0.189   0.391   0.010  -1.688 |  -0.935  11.898   0.232
## other         |  -0.270   0.114   0.002  -0.822 |   0.428   0.357   0.006
## Not.sportsman |  -0.053   0.058   0.002  -0.750 |   0.313   2.561   0.066
## sportsman     |   0.036   0.039   0.002   0.750 |  -0.211   1.731   0.066
## breakfast     |  -0.512   6.534   0.242  -8.503 |  -0.645  12.969   0.385
## Not.breakfast |   0.472   6.032   0.242   8.503 |   0.596  11.971   0.385
## lunch         |  -0.861   5.648   0.127  -6.171 |   0.247   0.578   0.010
## Not.lunch     |   0.148   0.971   0.127   6.171 |  -0.042   0.099   0.010
##                v.test     Dim.3     ctr    cos2  v.test  
## alone           3.469 |   0.120   0.706   0.027   2.835 |
## lemon           4.849 |  -0.627   3.248   0.049  -3.814 |
## milk           -8.332 |   0.196   0.603   0.010   1.744 |
## other           1.303 |  -1.675   6.315   0.087  -5.094 |
## Not.sportsman   4.448 |  -0.817  20.171   0.451 -11.609 |
## sportsman      -4.448 |   0.552  13.635   0.451  11.609 |
## breakfast     -10.722 |  -0.004   0.001   0.000  -0.066 |
## Not.breakfast  10.722 |   0.004   0.001   0.000   0.066 |
## lunch           1.768 |   1.535  25.906   0.405  11.001 |
## Not.lunch      -1.768 |  -0.264   4.453   0.405 -11.001 |
## 
## Categorical variables (eta2)
##                 Dim.1 Dim.2 Dim.3  
## How           | 0.026 0.273 0.145 |
## Sport         | 0.002 0.066 0.451 |
## breakfast     | 0.242 0.385 0.000 |
## lunch         | 0.127 0.010 0.405 |
## dinner        | 0.239 0.004 0.007 |
## friends       | 0.084 0.263 0.027 |
## sex           | 0.229 0.119 0.036 |
## pub           | 0.151 0.022 0.002 |
## home          | 0.090 0.015 0.125 |
## tea.time      | 0.307 0.058 0.006 |

We ran the MCA on the selected features. From the result, we can see that the first two dimensions explain 23.1% of the total variance, and about 10 of the 15 dimensions are needed to cover around 80% of it. This implies that the selected variables are not strongly correlated with each other.
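The same numbers can be pulled out of the result object; a small sketch, assuming (as in FactoMineR's usual output) that the third column of mca$eig holds the cumulative percentage of variance:

# Cumulative percentage of variance explained by the MCA dimensions
cum_var <- mca$eig[, 3]
round(cum_var, 1)
# smallest number of dimensions needed to reach 80% of the variance
which(cum_var >= 80)[1]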

# MCA factor map of the variable categories (individuals hidden), coloured by the variable each category belongs to
plot(mca, invisible = c("ind"), habillage = "quali")

From the figure above, Dim 1 seems related to living habits, since breakfast, lunch and Not.dinner are typical of people who wake up early and follow a healthier daily routine.
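To check this interpretation, FactoMineR's dimdesc() can be used to list which variables and categories are most strongly linked to the first dimension; this is a suggested follow-up rather than part of the original analysis:

# Describe dimension 1: variables and categories most associated with it
dimdesc(mca, axes = 1)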


Analysis of longitudinal data

Summary of week6 study

  • In week6, I have studied the structure of longitudinal data, in which a response variable is measured repeatedly on the same subjects on several occasions over a period of time. In this setting we need to consider both the within-subject and the between-subject variation. Since the repeated measurements of the response are very likely to be correlated, we need to account for this correlation and assess the effects of the explanatory variables conditional on it.
# Loading packages
library(dplyr)
library(tidyr)
library(ggplot2)
library(GGally)
library(corrplot)
library(FactoMineR)
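Longitudinal data are usually handled in long format, where each measurement occasion gets its own row; a minimal toy sketch with hypothetical subjects and weeks, using the packages loaded above:

# Toy wide-format data: one row per subject, one column per measurement occasion
toy_wide <- data.frame(id    = 1:3,
                       week0 = c(10, 12, 9),
                       week1 = c(11, 14, 10),
                       week2 = c(13, 15, 12))
# Reshape to long format: each subject now contributes one row per occasion,
# and those repeated rows are typically correlated within the subject
toy_long <- gather(toy_wide, key = week, value = response, -id)
arrange(toy_long, id)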

1. Read the data

This week, we will use the human development indices data for PCA analysis. The original data can be found here.

The data frame has 155 observations of 8 variables:

  • Life.Exp: Life expectancy at birth
  • Edu.Exp: Expected years of schooling
  • GNI: Gross National Income per capita
  • Mat.Mor: Maternal mortality ratio
  • Ado.Birth: Adolescent birth rate
  • Parli.F: Percentage of female representatives in parliament
  • Edu2.FM: Edu2.F / Edu2.M
  • Labo.FM: Labo2.F / Labo2.M
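As a purely hypothetical sketch of how the two ratio variables at the end of this list could be formed (assuming a data frame named human with raw columns Edu2.F, Edu2.M, Labo2.F and Labo2.M):

# Hypothetical: derive the two female-to-male ratio variables from the raw columns
human$Edu2.FM <- human$Edu2.F / human$Edu2.M
human$Labo.FM <- human$Labo2.F / human$Labo2.M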